[Bug]: Function 'execute_code' not found in available tools #8805

Open
atasever opened this issue May 30, 2025 · 4 comments
Labels
bug (Something isn't working) · llm (Related to specific LLMs)

Comments


Is there an existing issue for the same bug? (If one exists, thumbs up or comment on the issue instead).

  • I have checked the existing issues.

Describe the bug and reproduction steps

I am getting the following error when using Devstral as the LLM.

Function 'execute_code' not found in available tools: ['execute_bash', 'think', 'finish', 'browser', 'execute_ipython_cell', 'str_replace_editor', 'fetch']

OpenHands Installation

Docker command in README

OpenHands Version

0.39

Operating System

MacOS

Logs, Errors, Screenshots, and Additional Context

08:05:40 - OBSERVATION
[Agent Controller b86993b5f3e5409c805117a19aa11057] ErrorObservation
Function 'execute_code' not found in available tools: ['execute_bash', 'think', 'finish', 'browser', 'execute_ipython_cell', 'str_replace_editor', 'fetch']

atasever added the bug label May 30, 2025
enyst (Collaborator) commented May 30, 2025

I think what happens is that Devstral is hallucinating that tool. It's possible it's just a fluke, and it continues by correcting itself?

lowlyocean commented May 30, 2025

I just saw this as well, except the tool it tried was "execute_command". Would native tool calling help prevent this to some degree? I tried setting LLM_NATIVE_TOOL_CALLING to true, but got the previously reported error.
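
One way to set that flag with the README's docker run command might look like the sketch below; the image name, tag, and port mapping are assumptions following the general pattern in the docs, not a verified 0.39 invocation:

# Sketch: pass the flag as an environment variable when launching OpenHands.
# Image/tag and ports are illustrative; copy the exact command from the README.
docker run -it --rm \
  -e LLM_NATIVE_TOOL_CALLING=true \
  -p 3000:3000 \
  docker.all-hands.dev/all-hands-ai/openhands:0.39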

I'm using a version of Devstral that is listed by Ollama as having tool support:

ollama show hf.co/unsloth/Devstral-Small-2505-GGUF:Q2_K_numgpu41
  Model
    architecture        llama
    parameters          23.6B
    context length      131072
    embedding length    5120
    quantization        Q2_K

  Capabilities
    completion
    tools
    vision

  Projector
    architecture        clip
    parameters          438.96M
    embedding length    1024
    dimensions          5120

  Parameters
    num_gpu        41
    stop           "<s>"
    stop           "[INST]"
    temperature    0.15
    min_p          0.01
    num_ctx        32768

This is the Modelfile that created this model. It works quite well in OpenHands most of the time, but I suspect native tool calling could help with these rare hallucinations.

FROM hf.co/unsloth/Devstral-Small-2505-GGUF:Q2_K
PARAMETER num_gpu 41
PARAMETER num_ctx 32768
PARAMETER temperature 0.15
PARAMETER min_p 0.01
TEMPLATE """{{- range $index, $_ := .Messages }}
{{- if eq .Role "system" }}[SYSTEM_PROMPT]{{ .Content }}[/SYSTEM_PROMPT]
{{- else if eq .Role "user" }}
{{- if and (le (len (slice $.Messages $index)) 2) $.Tools }}[AVAILABLE_TOOLS]{{ $.Tools }}[/AVAILABLE_TOOLS]
{{- end }}[INST]{{ .Content }}[/INST]
{{- else if eq .Role "assistant" }}
{{- if .Content }}{{ .Content }}
{{- if not (eq (len (slice $.Messages $index)) 1) }}</s>
{{- end }}
{{- else if .ToolCalls }}[TOOL_CALLS][
{{- range .ToolCalls }}{"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}}
{{- end }}]</s>
{{- end }}
{{- else if eq .Role "tool" }}[TOOL_RESULTS]{"content": {{ .Content }}}[/TOOL_RESULTS]
{{- end }}
{{- end }}"""

enyst (Collaborator) commented May 30, 2025

As noted in the linked issue, we have found Devstral to work better without native tool use. I don't know what kind of issues there were, but now that you see errors, I do wonder whether the errors from LiteLLM may be related to it. 🤔

lowlyocean commented May 30, 2025

In your results spreadsheet, none of the native function calling models are ones that run locally in Ollama. Do you have a single Ollama model that's known to work with OpenHands while LLM_NATIVE_TOOL_CALLING is true? That would help narrow down whether it's a model-specific issue or a more general issue with how LiteLLM is used by OpenHands.
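
Independent of OpenHands, one way to check whether an Ollama model emits native tool calls at all is to query Ollama's /api/chat endpoint with a tools array. This is a sketch; the model name and tool schema are illustrative assumptions:

# Sketch: probe native tool calling directly against Ollama's /api/chat API.
# The model name and tool definition are assumptions for illustration.
curl -s http://localhost:11434/api/chat -d '{
  "model": "devstral-openhands",
  "messages": [{"role": "user", "content": "List the files in /tmp"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "execute_bash",
      "description": "Run a bash command",
      "parameters": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"]
      }
    }
  }],
  "stream": false
}'
# A tool-capable model should answer with message.tool_calls naming "execute_bash".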

mamoodi added the llm label Jun 1, 2025